-
Neural language models (LMs) represent facts about the world described by text. Sometimes these facts derive from training data (in most LMs, a representation of the word banana encodes the fact that bananas are fruits). Sometimes facts derive from input text itself (a representation of the sentence I poured out the bottle encodes the fact that the bottle became empty). We describe REMEDI, a method for learning to map statements in natural language to fact encodings in an LM’s internal representation system. REMEDI encodings can be used as knowledge editors: when added to LM hidden representations, they modify downstream generation to be consistent with new facts. REMEDI encodings may also be used as probes: when compared to LM representations, they reveal which properties LMs already attribute to mentioned entities, in some cases making it possible to predict when LMs will generate outputs that conflict with background knowledge or input text. REMEDI thus links work on probing, prompting, and LM editing, and offers steps toward general tools for fine-grained inspection and control of knowledge in LMs.
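Mechanically, both uses described in the abstract reduce to vector operations in the LM's hidden space: editing adds a learned fact vector to an entity's hidden state, and probing compares that vector against the state the LM already computed. The following is a minimal sketch of that idea under stated assumptions, not the paper's implementation: the editor network, hidden size, and toy tensors are hypothetical stand-ins for a trained REMEDI mapping and real LM activations.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 768  # hypothetical LM hidden size


class FactEditor(nn.Module):
    """Maps an encoded fact statement to an edit vector in LM hidden space.

    A stand-in for the learned REMEDI mapping; the real editor is trained
    so that (hidden + edit) steers downstream generation toward the fact.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, fact_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(fact_embedding)


editor = FactEditor(HIDDEN_DIM)

# Hypothetical tensors standing in for real LM activations:
entity_hidden = torch.randn(HIDDEN_DIM)   # LM hidden state at the entity mention
fact_embedding = torch.randn(HIDDEN_DIM)  # encoding of e.g. "the bottle is empty"

# Editing: add the fact encoding to the hidden state, then let the LM's
# remaining layers run on the edited state.
edit = editor(fact_embedding)
edited_hidden = entity_hidden + edit

# Probing: high similarity between the edit vector and the unedited hidden
# state suggests the LM already attributes this property to the entity.
similarity = torch.cosine_similarity(edit, entity_hidden, dim=0)
print(f"attribute score: {similarity.item():.3f}")
```

In an actual LM, the addition would be applied at the entity token's hidden representation in an intermediate layer, which is what lets the edit modify downstream generation as the abstract describes.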
-
We propose an interactive approach to language learning that utilizes linguistic acceptability judgments from an informant (a competent language user) to learn a grammar. Given a grammar formalism and a framework for synthesizing data, our model iteratively selects or synthesizes a data point according to one of a range of information-theoretic policies, asks the informant for a binary judgment, and updates its own parameters in preparation for the next query. We demonstrate the effectiveness of our model in the domain of phonotactics, the rules governing what kinds of sound sequences are acceptable in a language, and carry out two experiments, one with typologically natural linguistic data and another with a range of procedurally generated languages. We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, and sometimes greater than, fully supervised approaches.
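The query loop lends itself to a compact illustration. The sketch below is a toy stand-in rather than the paper's system: it assumes a hypothetical hypothesis space of grammars that each forbid one bigram, and uses a maximum-entropy query policy, one of the simpler information-theoretic criteria: ask about the string whose accept/reject judgment the current posterior is least certain of.

```python
import itertools
import math

# Toy hypothesis space: each "grammar" forbids exactly one two-sound sequence.
# The alphabet and grammars are hypothetical stand-ins for the paper's
# phonotactic formalism, used only to illustrate the query loop.
SOUNDS = "ptkas"
HYPOTHESES = list(itertools.product(SOUNDS, repeat=2))


def accepts(hypothesis, string):
    """A string is acceptable iff it avoids the grammar's forbidden bigram."""
    return "".join(hypothesis) not in string


def true_informant(string):
    # Hypothetical ground-truth grammar the informant judges against.
    return accepts(("p", "k"), string)


posterior = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
candidates = ["".join(s) for s in itertools.product(SOUNDS, repeat=3)]


def label_entropy(string):
    """Entropy of the predicted accept/reject judgment under the posterior.

    Querying the highest-entropy string is a simple information-theoretic
    policy: its answer is maximally informative about the grammar.
    """
    p_accept = sum(p for h, p in posterior.items() if accepts(h, string))
    if p_accept in (0.0, 1.0):
        return 0.0
    return -(p_accept * math.log2(p_accept) + (1 - p_accept) * math.log2(1 - p_accept))


for step in range(10):
    query = max(candidates, key=label_entropy)
    if label_entropy(query) == 0.0:
        break  # the posterior is certain about every candidate; stop querying
    judgment = true_informant(query)
    # Noiseless Bayesian update: eliminate hypotheses inconsistent with the
    # informant's judgment, then renormalize.
    for h in posterior:
        if accepts(h, query) != judgment:
            posterior[h] = 0.0
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    remaining = sum(1 for p in posterior.values() if p > 0)
    print(f"step {step}: asked {query!r}, judged {judgment}, {remaining} grammars remain")
```

Each query here halves (at best) the set of surviving grammars; the paper's policies play the same role over a richer grammar space, with parameter updates in place of this exact posterior elimination.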